
    System Identification of a Micro Aerial Vehicle

    The purpose of this thesis was to implement a Model Predictive Control based system identification method on a micro aerial vehicle (DJI Matrice 100), as outlined in a study performed by ETH Zurich. Through limited test flights, data was obtained that allowed for the generation of first- and second-order system models. The first-order models were robust, but the second-order model fell short because the data used for the model was insufficient.

    Analyzing Crash Potential in Mixed Traffic with Autonomous and Human-Driven Vehicles

    Reducing crash counts on saturated road networks is one of the most significant benefits behind the introduction of Autonomous Vehicle (AV) technology. To date, many researchers have studied how AVs maneuver in different traffic situations, but less attention has been paid to car-following scenarios between AVs and human drivers. A mismatch in braking and accelerating decisions in this car-following scenario can lead to rear-end near-crashes and therefore needs to be studied. This thesis aims to investigate the driving behavior of human drivers that follow a designated AV leader in a car-following situation and to compare the results with a scenario in which the leader is a human-like driver. In this study, speed trajectory data was collected from 48 participants using a driving simulator. To estimate the near-crash risk between the participants and the leading vehicle, critical thresholds of six Surrogate Safety Measures (SSMs) were used: Time to Collision (TTC), Inverse Time to Collision (ITTC), Modified Time to Collision (MTTC), Deceleration Rate to Avoid Crash (DRAC), critical jerk, and Warning Index (WI). Potential near-crash events and safe driving events were classified using a random forest algorithm after applying oversampling and undersampling techniques. The results from the two-sample t-tests indicated a significant difference between the overall deceleration rates, braking speeds, and acceleration rates of the participants and the designated AV leader. However, no such difference was found between the participants and the human-like leader while braking and accelerating at stop-controlled intersections. Out of the six SSMs, MTTC detected near-crash events 10 seconds before their actual occurrence at a range of 11.93 m with 83% accuracy. The surrogate measures identified a higher number of near-crash (high-risk) events when the participants followed the designated AV and made braking maneuvers at the stop-controlled intersections.
    Based on the number of near-crash (high-risk) events, the designated AV's C3.25 speed profile (with a maximum deceleration rate of 3.25 m/s²) posed the highest crash risk to the participants in the following vehicle. For the classification of potential near-crash events, a random forest classifier based on undersampled data achieved the highest average accuracy rate of 92.2%. The deceleration rates of the designated AV had the highest impact on near-crashes between the AV and the participants. However, shorter clearances during braking maneuvers at intersections significantly affected near-crashes between the human-like leader and the participants in the following vehicle.
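
    As one hedged illustration of the surrogate measures named above, Time to Collision (TTC) can be computed from the gap and the speed difference between the following and leading vehicles. The variable names and the 1.5 s critical threshold below are illustrative assumptions, not values taken from the thesis.

```python
# Minimal sketch of the Time to Collision (TTC) surrogate safety measure.

def time_to_collision(gap_m, v_follower_mps, v_leader_mps):
    """TTC in seconds; infinite when the follower is not closing the gap."""
    closing_speed = v_follower_mps - v_leader_mps
    if closing_speed <= 0:
        return float("inf")  # vehicles are separating or holding distance
    return gap_m / closing_speed

CRITICAL_TTC_S = 1.5  # assumed near-crash threshold, for illustration only

ttc = time_to_collision(12.0, 15.0, 11.0)
print(ttc, ttc < CRITICAL_TTC_S)  # 3.0 False
```

    The other measures (ITTC, MTTC, DRAC, jerk, WI) are variations on the same kinematic quantities; an event is flagged as a potential near-crash when a measure crosses its critical threshold.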

    AIDS: The Dreadful Breach in the Immune System (World AIDS Day Guest Comment)

    World AIDS Day Guest Comment by Dr. Aman Sharm

    Document Management System

    The debate over using paper documents versus computer documents has long been a topic of interest, and much research has been done on it. Experiments have also evaluated the performance of users working with computers versus working manually, with no definitive conclusions (Chris Anderson, 2010) (Askwal, 1985) (Noyes and Garland, 2008). However, one cannot deny the time it takes to store and search through papers manually. Every year, new policies help employees understand the guidelines and operational procedures they must follow while employed by a company. Keeping track of such documents, as well as assigning revision tasks to certain users, is lengthy, cumbersome, and not automated. The software proposed in this paper would help an organization maintain documents, reducing unnecessary tasks and disk-space usage on the machine.

    Glyphosate Resistance of <em>Chloris virgata</em> Weed in Australia and Glyphosate Mobility Are Connected Problems

    The purpose of this review paper is to address two major aspects of glyphosate application on farmers' fields. The first aspect is the development of glyphosate resistance in weeds like Chloris virgata, and the second is glyphosate mobility, which is directly controlled by soil sorption processes and indirectly by molecule degradation processes. This is a global problem, as excessive glyphosate residues have been reported in groundwater, drinking water, and the urine of subsistence farmers from intensive agricultural localities, which can pose a risk to human health. Approaches such as biochar, as a possible strategy to control glyphosate leaching, and crop competition, as a cultural method to control glyphosate-resistant weeds like Chloris virgata, are potential solutions to the glyphosate resistance and glyphosate mobility problems.

    Privacy preserving data mining

    A fruitful direction for future data mining research will be the development of techniques that incorporate privacy concerns. Specifically, we address the following question: since the primary task in data mining is the development of models about aggregated data, can we develop accurate models without access to the precise information in individual data records? We analyze the possibility of privacy in data mining techniques in two phases: randomization and reconstruction. Data mining services require accurate input data for their results to be meaningful, but privacy concerns may influence users to provide spurious information. To preserve client privacy in the data mining process, techniques based on random perturbation of data records are used. Suppose there are many clients, each holding some personal information, and one server, which is interested only in aggregate, statistically significant properties of this information. The clients can protect the privacy of their data by perturbing it with a randomization algorithm and then submitting the randomized version. This approach is called randomization. The randomization algorithm is chosen so that aggregate properties of the data can be recovered with sufficient precision, while individual entries are significantly distorted. For the concept of using value distortion to protect privacy to be useful, we need to be able to reconstruct the original data distribution so that data mining techniques can be effectively utilized to yield the required statistics.

    Analysis: Let x_i be the original instance of data at client i. We introduce a random shift y_i using the randomization technique explained below. The server runs the reconstruction algorithm (also explained below) on the perturbed value z_i = x_i + y_i to get an approximation of the original data distribution suitable for data mining applications.

    Randomization: We have used the following randomizing operator for data perturbation: given x, let R(x) be (x + ε) mod 1001, where ε is chosen uniformly at random in {-100, …, 100}.

    Reconstruction of the discrete data set: with f_X(x) = P(X=x), f_Y(y) = P(Y=y), and f_Z(z) = P(Z=z) given,

        P(X=x | Z=z) = P(X=x, Z=z) / P(Z=z)
                     = P(X=x, X+Y=z) / f_Z(z)
                     = P(X=x, Y=z-x) / f_Z(z)
                     = P(X=x) * P(Y=z-x) / f_Z(z),

    where the last step uses the independence of X and Y.

    Results: In this project we carried out two aspects of privacy-preserving data mining. The first phase perturbs the original data set using the randomization operator, and the second phase reconstructs the randomized data set using the proposed algorithm to obtain an approximation of the original data set. Performance metrics such as percentage deviation, accuracy, and privacy breaches were calculated. In this project we studied the technical feasibility of realizing privacy-preserving data mining. The basic premise was that the sensitive values in a user's record are perturbed using a randomizing function and an approximation of the original data set is recovered using the reconstruction algorithm.
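
    The two phases above can be sketched directly in code. Only the operator R(x) = (x + ε) mod 1001 with ε uniform in {-100, …, 100} and the identity P(X=x | Z=z) = P(X=x)·P(Y=z-x)/P(Z=z) come from the text; the helper names and the uniform-prior example are our own illustrative assumptions.

```python
import random

DOMAIN = 1001                  # values live in {0, ..., 1000}
NOISE = list(range(-100, 101))
P_NOISE = 1.0 / len(NOISE)     # uniform noise probability, 1/201

def randomize(x, rng):
    """The randomizing operator R(x) = (x + e) mod 1001."""
    return (x + rng.choice(NOISE)) % DOMAIN

def noise_pmf(diff):
    """P(Y = diff), accounting for wrap-around on the mod-1001 domain."""
    diff = (diff + DOMAIN // 2) % DOMAIN - DOMAIN // 2  # map to [-500, 500]
    return P_NOISE if -100 <= diff <= 100 else 0.0

def posterior(x, z, prior):
    """P(X=x | Z=z) = P(X=x) * P(Y=z-x) / P(Z=z) for a given prior on X."""
    p_z = sum(prior[v] * noise_pmf(z - v) for v in range(DOMAIN))
    return 0.0 if p_z == 0.0 else prior[x] * noise_pmf(z - x) / p_z

# With a uniform prior, every x within 100 of the observed z is equally
# likely (probability 1/201), and anything farther away is impossible.
prior = [1.0 / DOMAIN] * DOMAIN
print(round(posterior(500, 550, prior), 6))  # 0.004975 (= 1/201)
print(posterior(300, 550, prior))            # 0.0
```

    Applying this posterior iteratively over all perturbed values z_i, starting from a uniform prior, is one way to refine an estimate of the original distribution from the randomized submissions.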